
Search in the Catalogues and Directories

Hits 1 – 11 of 11

1
Learning How to Translate North Korean through South Korean ...
BASE
2
Joint Optimization of Tokenization and Downstream Model ...
BASE
3
Multimodal pretraining unmasked: A meta-analysis and a unified framework of vision-and-language berts ...
BASE
4
Transformer-based Lexically Constrained Headline Generation ...
BASE
5
Multimodal pretraining unmasked: A meta-analysis and a unified framework of vision-and-language berts
In: Transactions of the Association for Computational Linguistics, 9 (2021)
Abstract: Large-scale pretraining and task-specific fine-tuning is now the standard methodology for many tasks in computer vision and natural language processing. Recently, a multitude of methods have been proposed for pretraining vision and language BERTs to tackle challenges at the intersection of these two key areas of AI. These models can be categorized into either single-stream or dual-stream encoders. We study the differences between these two categories, and show how they can be unified under a single theoretical framework. We then conduct controlled experiments to discern the empirical differences between five vision and language BERTs. Our experiments show that training data and hyperparameters are responsible for most of the differences between the reported results, but they also reveal that the embedding layer plays a crucial role in these massive models.
ISSN: 2307-387X
URL: https://hdl.handle.net/20.500.11850/517633
https://doi.org/10.3929/ethz-b-000517633
BASE
6
Transformer-based Lexically Constrained Headline Generation ...
BASE
7
It’s Easier to Translate out of English than into it: Measuring Neural Translation Difficulty by Cross-Mutual Information ...
BASE
8
It’s Easier to Translate out of English than into it: Measuring Neural Translation Difficulty by Cross-Mutual Information
In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (2020)
BASE
9
The mechanism of additive composition [Journal]
Tian, Ran [Author]; Okazaki, Naoaki [Other]; Inui, Kentaro [Other]
DNB Subject Category Language
10
Other Topics You May Also Agree or Disagree: Modeling Inter-Topic Preferences using Tweets and Matrix Factorization ...
BASE
11
A preference learning approach to sentence ordering for multi-document summarization
In: Information Sciences. New York, NY: Elsevier Science Inc. 217 (2012), 78–95
OLC Linguistik

Catalogues: 2
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 9